perm filename CHPT.8[LSP,JRA] blob sn#316169 filedate 1977-11-07 generic text, type C, neo UTF8
.SEC(Implications of LISP,,,P256:)
.FP
.<<to epilog section of book, end of notes.end>>
Any text which is of the size and extent of this book certainly owes
a word of explanation to its readers;
after {PAGE} pages  it is not fair to turn the page and find the index.
This section will try to summarize what we have accomplished in this book
and will address some of the current research related to LISP-like languages.


It is the author's belief that
LISP should be the first language learned by 
persons interested in computer science.
As a  language for studying algorithms for data structures, it is
presently without peer.  As you have seen, the problems of 
language implementation and their solutions are describable
quite naturally in the implementation of LISP.
As a
programming language, LISP has  powerful features possessed by few
languages,  in particular the uniform representation of program and data.  

We have developed several areas of programming languages and data structures
in this book and have hinted at future possibilities in several
of those areas:

.GROUP;
.NL
%21.%1##Mathematical models: This refers to some of the theoretical areas which 
 use   LISP as the basis  for mathematical studies of algorithms,
equivalence of programs,  and program synthesis from formal specifications.
These are not  of  purely theoretical interest: correctness of non-trivial programs
is an important practical problem.
The issue  of programming language semantics also needs clarification.
This branch 
 of semantics seeks a descriptive tool for 
the meaning of constructs of a language; in particular, a tool of the power and
clarity of BNF descriptions of the syntax of languages. We have talked a bit
about semantic issues in {yonss(P85)}.  The close relationship between LISP
evaluators and denotational models is encouraging.
.apart;

.GROUP;
.NL
%22.%1##Generalized control structures: We hinted at some of the options
under consideration for control of algorithms. The work on 
generalized access and control, also known as "spaghetti#stacks" (⊗↑[Bob#73a]↑),
is of current interest and many of its 
motivations and implications should be more understandable now. The devices which
are required for such general control also come directly from the LISP experience.
Devices like "spaghetti#stacks" serve for implementation  of higher level
language constructs like pattern directed invocation.
.APART;
.GROUP;
.NL
%23.%1##Interpreter/compilers: This is an interesting area which begins to resolve
the dichotomy between compilation and interpretation. Work was done in ⊗↑[Mit#70]↑,
but little has been done since. Again, LISP is a natural 
vehicle for discussing this topic.
.APART
.GROUP;
.NL
%24.%1##Implementation tricks and machine organization: In the past
LISP has been the originator of several now-everyday programming
tricks. Current production versions of LISP continue to develop
sophisticated techniques for the management of very large spaces, the representation of
data structures, and the execution of complex algorithms. Many of those
ideas have direct implications for the development of hardware for 
LISP-like languages.
.APART
.GROUP;
.NL
%25.%1##Languages for Artificial Intelligence research: Though LISP is
thought of primarily as the language for Artificial Intelligence
programming, we have barely touched on those applications.
In the last decade, LISP has really become a systems programming language,
rather than a research tool. It is the systems language for developing more
powerful languages for A.I. They are "more powerful" in the sense of descriptive
power, rather  than computational power.  LISP is also used exclusively
as the systems programming language on the M.I.T. LISP machine.
.APART
.GROUP;
.NL
%26.%1##Interaction and personal computation: Though LISP developed
in the late 1950's, contemporary implementations are finally
exploiting the true interactive nature of the language. A LISP machine
is a sophisticated and powerful calculator. The language is the most
interactive of the major programming languages and, as such, should attract
the interest of the personal computing field. We should soon see
the development of sophisticated %7m%1-computer LISP implementations.
The combination of LISP, inexpensive hardware, and creative minds should be
interesting to watch.
.pt18
.APART
The main purpose of this epilog is to tie together most of the material
we have studied. The underlying thread in our study has been the internal and
external behavior of LISP. A rather natural  vehicle to unify these  topics
is the design of a new LISP-like language.  Language design is not a pastime
to be entered into lightly; we will therefore sketch an existing LISP extension
named ⊗→EL1↔←.
The name EL1 is derived from Extensible  Language/1.

There are two basic views in programming language design.
One approach is to design a small language, called a base language,
which has sufficient expressive power to allow its user to 
mold a special language from that base. This is called
the "core" approach, and such base languages are called extensible languages.
The alternative, called the "shell" approach, is 
to design a full language capable of covering a specific area. That
area may cover only a special domain of algorithms or might
encompass %6all%1 algorithmic processes. 



The "shell" approach to general purpose languages is  
best exemplified by ⊗→PL/1↔←.
This approach attempts to build a single language which encompasses all the
features its users are expected to require. The approach gives rise
to many problems. Of necessity, the language is large; unless care is taken
a programmer will have difficulties in learning the language. Even if
a small subset is presented to a beginner, the occurrence of bugs
in a user program may cause mysterious results if those
bugs happen to invoke features outside the subset.
Also the language implementor is in for a hard time; language processors
will have to be cognizant of %6all%1 the language features even  if
the user wishes to work within a small subset. The problems of
optimization are compounded immensely, since the interaction of language
features may well lead to tortuous code. Though
the "shell" approach presents severe problems, the "core" approach
of extensibility is not without flaw. There are non-trivial research
areas involved in developing
smooth and
efficient means of describing the syntax, pragmatics and semantics
of user defined data structures, control structures and operations.

An ⊗→extensible language↔← is designed to supply a base  language, which has
sufficient handles such that a user can describe new data structures, new operations,
and new control structures. These new  objects are  described in terms of 
combinations of constructs in the ⊗→base language↔←. Extensibility is implicitly
committed to the premise that the power of high level languages is
primarily notational rather than computational. 
That is apparent from our experience with high level numerical languages.
Their notations allow us to express our problems in mathematics-like
statements, rather than coding in  machine language.


Like LISP, the extensible language  EL1 maps programs
onto data structures. EL1 has richer data types including integers, characters, 
pointers, and structures. Its syntax is described in BNF and a mapping from
well-formed syntactic units to data structures is given. The EL1 evaluator
is written in EL1 and manipulates the data-structure representation of EL1 programs
in a manner totally analogous to the LISP %3eval%* function.

The syntax of EL1  is similar to that of M-expression LISP.
The details are not relevant and are best left to the user's manual, ⊗↑[EL1#74]↑.
What %6is%1 important is the interrelationship between the constructs of the
language and their data structure representations.
That is, we wish to develop a representation of the abstract syntax of EL1 using
the data structures available in EL1.  Our approach here is the other way round:
to motivate the data structures of a language by the requirements for
expressing a realistic evaluator.⊗↓Compare the  
following discussion with {yonss(P138)}.← 


.BEGIN TABIT1(18);GROUP;
.BOXA
Consider this fragment of the LISP syntax from {yon(P66)}:
.PT2
<form>\::= <constant> | <application> |  <variable>
.PT2
<application>\::= <function-part>[<arg-list>]
.PT2
<arg-list>\::= <arg> | <arg-list>;<arg>
.BOXB
.END
.FP
These  equations demonstrate the three kinds of BNF equations.
We will  concentrate
our attention on  the last two equations.

The LISP M-to-S-expression mapping will map an <application> like
%3f[x;y;z]%1 onto %3(F X Y Z)%1. For all intents and purposes, LISP has
little choice; LISP has few representations available and
since we wish to use the S-expr 
representation as the programming language, the representation must be readable.
In the typical implementations of LISP, 
the representation of %3f[x;y;z]%1 is %3(F#.#(X#.#(Y#.#(Z#.#NIL))))%1.
That representation requires a good deal of space, and some decoding by any program which
is to use it. 
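As an illustrative sketch of that dotted-pair representation and of the decoding it forces, consider the following fragment in modern Python; the names %3Cons%1, %3from_list%1, and %3to_list%1 are hypothetical, not part of any LISP implementation:

```python
# A minimal sketch of LISP's dotted-pair representation.
class Cons:
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

NIL = None   # the empty list terminator

def from_list(items):
    """Build the nested dotted pairs for a list such as (F X Y Z)."""
    rep = NIL
    for item in reversed(items):
        rep = Cons(item, rep)
    return rep

def to_list(rep):
    """Walk the pairs back into a sequence; this is the decoding that
    any program using the representation must perform."""
    items = []
    while rep is not NIL:
        items.append(rep.car)
        rep = rep.cdr
    return items

# (F . (X . (Y . (Z . NIL))))
app = from_list(["F", "X", "Y", "Z"])
```

Note that each element costs one pair, and that finding the n-th argument requires chaining through n %3cdr%1s.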
If we look closely at the storage requirements for <application>'s and
<arg-list>'s,
we see that there are differences.

The representation of an <application> has %6fixed%1 storage requirements;
it demands space for two components: a <function-part> component and an 
<arg-list> component. We have seen such storage structures before
in {yonss(P138)}; they are called record structures.
The name components of the record structure can be %3fun%1 and %3args%1,
and the selector functions are implemented by matching  the name components.
Note that the use of record structure is a bit freer than LISP's
list representation; we need not make a commitment to the position of 
the %3fun%1 component in the representation.


The requirements for <arg-list> are different. An arbitrary <arg-list>
has a variable number of components, but each component has the same characteristic:
it is an <arg>. We can represent 
a homogeneous object like <arg-list> as a sequence whose
length is fixed once the sequence is created. A natural storage class for such sequences is
a linear array, each component of which is an item of the sequence.
Information about the length of the sequence, and the class to which
elements of the sequence belong, can be stored in the "dope#vector" of the
representation.
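The two storage classes can be sketched in modern Python as follows; the names %3Application%1, %3ArgList%1, and the shape of the dope vector are hypothetical illustrations, not EL1 itself:

```python
# A record with fixed, named components for an <application>,
# and a homogeneous sequence with a "dope vector" for an <arg-list>.
class Application:
    # fixed storage requirements: exactly two named components;
    # selection is by name, not by position in a list
    def __init__(self, fun, args):
        self.fun = fun      # the <function-part> component
        self.args = args    # the <arg-list> component

class ArgList:
    # a linear array, plus descriptive information about it
    def __init__(self, args):
        self.elements = list(args)
        # the dope vector: the length of the sequence and the
        # class to which its elements belong
        self.dope = {"length": len(self.elements),
                     "element_class": "arg"}

app = Application("f", ArgList(["x", "y", "z"]))
```

Because selection is by component name, no commitment is made to the position of %3fun%1 in the underlying storage.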

What we are developing is a description of a class of storage representations
for language constructs. This class of  structures covers the
space  which LISP covers, but partitions it differently. More information is stored
explicitly, and the representations are more discriminating in their
storage requirements. Assuming that the resulting structures are made data 
structures of the language, we can  then write a LISP-like %3eval%1 which
runs on these data structures.

.GROUP;
Our refinement is not without penalty. We are in fact imposing a type
structure on the language. We know that
such restrictions are not always desirable. 
The type structure becomes more apparent when we consider the 
remaining syntax equation:

.BEGIN CENTER;GROUP;
.BOXA
<form> ::= <constant> | <application> |  <variable>
.BOXB
.END
.APART
.FP
Consistent with our treatment of record structures and sequences, we should
develop some  representation for <form>.
In LISP, no storage was allocated for the representation of such
alternative BNF equations; the recognition was done by recognizers embedded
in a conditional expression:
.BEGIN GROUP;SELECT 3;TABIT1(20);
.BOXA
\[is-const[x] → ...
\ is-app[x] → ...
\ is-var[x] → ...
\ %et%1 →  %1 ... what to do if %3x%1 is %6not%1 a form ...
.BOXB
.END

In EL1, every data item has an associated type. Since  we are representing
language constructs as data structures they will also have associated types.
To determine if something is an <application> requires only that we
examine the associated type.  The question then arises: what kind of object
is to be associated with an object like <form>? At any one time 
an object of type <form> is one of the three alternatives. The EL1 solution
is to assign a pointer-like data object as the representation of such objects.
The type of the pointer is constrained to point at one of the alternatives.

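Dispatching on an associated type tag, rather than applying recognizers in a conditional, can be sketched as follows; the class %3Typed%1 and the tag names are hypothetical stand-ins for EL1's type machinery:

```python
# Every data item carries an associated type tag.
class Typed:
    def __init__(self, tag, value):
        self.tag = tag        # e.g. "const", "app", "var"
        self.value = value

def eval_form(form):
    # Examining the associated type replaces the recognizers
    # is-const, is-app, and is-var of the LISP conditional.
    if form.tag == "const":
        return ("constant", form.value)
    elif form.tag == "app":
        return ("application", form.value)
    elif form.tag == "var":
        return ("variable", form.value)
    else:
        raise TypeError("x is not a form")
```

An object of type <form> is then representable as a tagged pointer constrained to one of the three alternatives.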

.GROUP;
Let's compare LISP:
Given an application %3f[x;A;1]%1 we represent it as a constant %3(F#X#(QUOTE#A)#1)%1.
That is:
.BOXA
.BEGIN CENTER;
%9r%4LISP%f(%3f[x;A;1]%f)%1 = %3(F X (QUOTE A) 1)%1 
.BOXB
.END
.FP
We wish to represent this application as a constant in EL1 as well. We need some
notation for record structures and sequence constants.
A record constant of type %3t%1 will be represented
as <%4t%1#c%41%1;#...#;c%4n%1>.⊗↓%1We have suppressed the explicit
naming of each component of the record. We assume that
a "template" of each type %3t%1 is available. That template
can be consulted to determine which component is referenced.←  
A sequence of type %3t%1 will be represented
as (%4t%1#s%41%1;#...#;s%4n%1). 

.BEGIN TABIT2(34,43);
.BOXA
Then %9r%4EL1%f(%3f[x;A;1]%f)%1 = <%4app\%9r%4EL1%f(%3f%f)%1; 
\(%4arg-list\%9r%4EL1%f(%3x%f)%1; 
\\%9r%4EL1%f(%3A%f)%1;
\\%9r%4EL1%f(%31%f)%1 ) >
.BOXB
.END
.FP
We have suppressed much of the detail because each of the components
of the representation must also have type information.

.APART
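The mapping %9r%4EL1%1 can itself be sketched in modern Python; %3Record%1, %3Seq%1, and %3rho%1 are hypothetical names chosen for this illustration:

```python
# Tagged records and sequences, as in the EL1-style representation.
class Record:
    def __init__(self, rtype, *components):
        self.rtype = rtype            # the record's type, e.g. "app"
        self.components = components

class Seq:
    def __init__(self, stype, *elements):
        self.stype = stype            # the sequence's type, e.g. "arg-list"
        self.elements = elements

def rho(expr):
    """Map the abstract form of an application like f[x;A;1] onto
    a record <app ...> containing an (arg-list ...) sequence."""
    if isinstance(expr, tuple):       # an application: (fun, arg, ...)
        fun, *args = expr
        return Record("app",
                      rho(fun),
                      Seq("arg-list", *(rho(a) for a in args)))
    return expr                       # variables and constants left as-is

# rho(("f", "x", "A", 1)) ~ <app rho(f); (arg-list rho(x); rho(A); rho(1))>
rep = rho(("f", "x", "A", 1))
```

The sketch suppresses the type information attached to each component, just as the text above does.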
The structured items in LISP are built from dotted pairs; the structured items
in EL1 are built from sequences (or lists), from records (or structures), and
from pointers (or references). The difference is that the EL1 user has more
choice over the underlying representation. This can lead to more
efficient utilization of storage and perhaps more efficient programs.
Since the evaluator is  just an EL1 program, these 
considerations carry over to the evaluation process.
EL1 allows us to %6represent%1 the data structures of the evaluation
process in terms closer to actual implementation. Both LISP and EL1
allow us to %6express%1 realistic implementations; however,
LISP must ultimately represent its structures as dotted pairs.

This realism of representation carries over to the evaluation process
which runs on the representation. The language is capable of 
accurately representing more of the techniques which 
occur in a language implementation. The language supplies storage management
primitives which allow  the creation of stack-like objects as well as the
heap-stored items of LISP. 

The language offers the user the ability to  %6define%1 abstract data structures
in a manner similar to that which we have been advocating informally. Given the
finer partition of storage structures, the user can  map those
structures onto more frugal representations than LISP's, and since the type-checking
is built into the language, the  language processors can check the consistency
of the  parameter passing.  Relating the abstract with the representation requires
some care. Supplying a comfortable interface between these two domains is a
non-trivial problem. EL1 supplies "lifting" and "lowering" mechanisms
to aid in this problem; the result is not completely satisfactory.

We have frequently seen how easy it has been to extend LISP by modifying %3eval%*.
This is particularly easy because we are making modifications at the 
level of data-structure representation of programs. 
In EL1 we wish to make the extensions at the level of concrete syntax, 
rather than abstract syntax as LISP does.⊗↓To program in the abstract syntax
would be possible, but messy. We would be writing constants consisting of
pointers, records, and sequences, rather than the list notation of LISP.← 
EL1 can do this by using a parser which is table-driven,
and
operates  on a  modifiable set of productions. 
To introduce a new construct into the language we need to supply
the concrete syntax for the item, its abstract syntax, 
a mapping from concrete to abstract, and its pragmatics expressed
as additions to the evaluator.
However, a new construct may have wider implications for the language, and more
general features may be required.⊗↓For example, if new control 
structures are desired, major revision of the
inner structure of the language may be necessary (⊗↑[Pre#76b]↑).← 
The field of language design is still
quite young and tentative.
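The idea of extension at the level of concrete syntax can be sketched, drastically simplified, as a table of productions that is modifiable at run time, each production paired with its mapping from concrete to abstract syntax; all of the names and the pattern notation here are hypothetical:

```python
# A modifiable production table: concrete pattern -> abstract constructor.
productions = {
    "if _ then _ else _": lambda c, t, e: ("cond", c, t, e),
}

def extend(pattern, constructor):
    """Introduce a new construct by adding a production to the table;
    the parser itself is not changed."""
    productions[pattern] = constructor

def parse(pattern, *parts):
    """Map a recognized concrete form onto its abstract representation."""
    return productions[pattern](*parts)

# a new construct: its concrete syntax, and its concrete-to-abstract mapping
extend("while _ do _", lambda test, body: ("loop", test, body))
```

The pragmatics of the new construct would still have to be supplied as additions to the evaluator, which this sketch does not show.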


Though LISP is a full twenty years old, it is still a fertile research
area.
Projects are extending LISP  in several directions.
⊗↑[Car#76]↑  investigates  the possibilities of adding a type-structure
to LISP, giving a strongly typed programming language and a precise
formalism in which properties of Typed LISP programs can be discussed.
This project supplies an Algol-like user language and develops
interesting theoretical results.

Several  people have supplied  parsers which give the user an Algol-like 
syntax (⊗↑[Mli#73]↑, ⊗↑[Pra#76]↑). Extensions to the ideas of SDIO
{yonss(p105)} are being pursued.

Programming language semantics  is being coupled with the  realities
of programming language design in a successor of ⊗↑[LCF#72]↑.
This is being built on the LISP experience.

 LISP is  the direct  source of inspiration for
two current MIT projects.
For several years  C.#Hewitt has been developing
an interesting philosophy of computation. Building on his experience with LISP,
he developed  PLANNER ⊗↑[Hew#72]↑, and more recently refined these ideas in a
methodology called Actors (⊗↑[Hew#74]↑, ⊗↑[Hew#76]↑).  One of the aims of these investigations
is to develop a self-sufficient description of modern computation much in the
way that the %9λ%1-calculus gave a foundation to the notion of computable
function. The goal therefore is much higher than  to develop just another
programming language. Along the way, however, these projects have
developed  some  of the most notable ideas of advanced programming languages.
From the
 PLANNER experience, partially implemented in ⊗↑[Mic#71]↑, 
evolved the ideas of pattern-directed invocation; see {yonss(P220)}.
This    gave  PLANNER a new way to call procedures, and its implementation
in Micro-PLANNER gave  researchers new power of expression much in the way that
LISP improved  over machine language several years before.  These investigations generated much
controversy and stimulated more research in language design for artificial
intelligence; see ⊗↑[Bob#74]↑, ⊗↑[Con#73]↑, ⊗↑[QA4#72]↑.
The lessons of PLANNER led to Actors  and a   study of the control aspects of 
programming languages. 
From these investigations Hewitt has  developed a model which
looks on control as an activity of passing messages between program
modules.
Recently, Hewitt's efforts have again stimulated others to examine
programming languages more closely. 


G. Sussman and G. Steele have been developing a language named SCHEME
(⊗↑[Sus#75]↑, ⊗↑[Ste#76b]↑, and ⊗↑[Ste#76c]↑) which was begun as an
experiment to understand  and implement several of the ideas of Hewitt.
The result is a dialect of LISP which uses  static binding rather than dynamic 
binding. The interpreter is built in the spirit of ⊗↑[Con#73]↑ and
is similar to the evaluator  of {yonss(P211)}.
The result is an interpreter which makes  the control aspects    of the
language much more explicit.  Using SCHEME, Steele and Sussman have
been able to illuminate many of the  control and access problems of 
programming languages. 

In these two projects we see two views of computation: that of the philosopher
and that of the tool builder. Both are important;  together they are  developing
an impressive  array of knowledge.

Finally, LISP is being used as an effective tool for the design of 
interactive programming systems. The successful development of 
programming systems which integrate all phases of program creation,
debugging, and optimization will be based on the LISP user's experience.


Many people find it curious that LISP has survived so long and so well.
It is not supported by any organization or computer manufacturer,
yet it  flourishes and continues to attract many of the most
exceptional computer science talents.  LISP does a lot of things well.
As a programming language, it is an exceptional tool for developing
sophisticated applications. The artificial intelligence community has always
been one of the most demanding and  creative builders of programming tools.
LISP's  treatment of program and data  supports this kind of behavior.

Until we can develop a tool which handles these areas as well 
as LISP, LISP will survive.